

Andrew Barto and Richard Sutton win 2024 Turing Award

AIHub

The Association for Computing Machinery has named Andrew Barto and Richard Sutton as the recipients of the 2024 ACM A.M. Turing Award. The pair have received the honour for "developing the conceptual and algorithmic foundations of reinforcement learning". In a series of papers beginning in the 1980s, Barto and Sutton introduced the main ideas, constructed the mathematical foundations, and developed important algorithms for reinforcement learning. The Turing Award comes with a $1 million prize, to be split between the recipients. Since its inception in 1966, the award has honoured computer scientists and engineers on a yearly basis.


Pioneers of Reinforcement Learning Win the Turing Award

WIRED

In the 1980s, Andrew Barto and Rich Sutton were considered eccentric devotees to an elegant but ultimately doomed idea--having machines learn, as humans and animals do, from experience. Decades on, with the technique they pioneered now increasingly critical to modern artificial intelligence and programs like ChatGPT, Barto and Sutton have been awarded the Turing Award, the highest honor in the field of computer science. Barto, a professor emeritus at the University of Massachusetts Amherst, and Sutton, a professor at the University of Alberta, trailblazed a technique known as reinforcement learning, which involves coaxing a computer to perform tasks through experimentation combined with either positive or negative feedback. "When this work started for me, it was extremely unfashionable," Barto recalls with a smile, speaking over Zoom from his home in Massachusetts. "It's been remarkable that [it has] achieved some influence and some attention," Barto adds.
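The "experimentation combined with positive or negative feedback" loop described above can be made concrete with a minimal sketch of tabular Q-learning, one of the best-known algorithms to come out of the Barto–Sutton line of work. The corridor environment, parameter values, and variable names below are illustrative choices, not taken from the article:

```python
import random

# Minimal tabular Q-learning sketch: an agent learns to walk right along a
# 5-cell corridor, receiving a reward of +1 only at the final cell.
N_STATES = 5            # cells 0..4; the reward lives at cell 4
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: move, clamp to the corridor, reward reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise act greedily on current estimates.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # Temporal-difference update: nudge Q toward reward + discounted future value.
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy steps right from every cell.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Nothing here is rewarded except reaching the goal, yet the temporal-difference updates propagate that feedback backwards until every cell prefers the rightward action, which is the "learning from experience" idea the article credits to Barto and Sutton.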


In Memoriam: E. Allen Emerson

Communications of the ACM

E. Allen Emerson was the first graduate student of Edmund M. Clarke at Harvard University. After discussing several ideas for Allen's dissertation, they identified a promising candidate: verifying a finite-state system against a formal specification. According to Martha Clarke, Edmund's widow, it was during a walk across Harvard Yard that they decided to call it "model checking." Emerson received his Ph.D. in applied mathematics for this work in 1981. Twenty-five years later, he and Clarke (along with Joseph Sifakis) shared the ACM A.M. Turing Award in 2007 for this and related work.


Geoffrey Hinton, AI pioneer and figurehead of doomerism, wins Nobel Prize

MIT Technology Review

Hinton shares the award with fellow computer scientist John Hopfield, who invented a type of pattern-matching neural network that could store and reconstruct data. Hinton built on this technology, known as a Hopfield network, to develop backpropagation, an algorithm that lets neural networks learn. Hopfield and Hinton borrowed methods from physics, especially statistical techniques, to develop their approaches. In the words of the Nobel Prize committee, the pair are recognized "for foundational discoveries and inventions that enable machine learning with artificial neural networks." But since May 2023, when MIT Technology Review helped break the news that Hinton was now scared of the technology that he had helped bring about, the 76-year-old scientist has become much better known as a figurehead for doomerism--the idea that there's a very real risk that near-future AI could precipitate catastrophic events, up to and including human extinction.


QirK: Question Answering via Intermediate Representation on Knowledge Graphs

Scheerer, Jan Luca, Lykov, Anton, Kayali, Moe, Fountalis, Ilias, Olteanu, Dan, Vasiloglou, Nikolaos, Suciu, Dan

arXiv.org Artificial Intelligence

We demonstrate QirK, a system for answering natural language questions on Knowledge Graphs (KG). QirK can answer structurally complex questions that are still beyond the reach of emerging Large Language Models (LLMs). It does so using a unique combination of database technology, LLMs, and semantic search over vector embeddings. The glue for these components is an intermediate representation (IR). The input question is mapped to the IR using LLMs; the IR is then repaired into a valid relational database query with the aid of semantic search over vector embeddings. This allows a practical synthesis of LLM capabilities and KG reliability. A short video demonstrating QirK is available at https://youtu.be/6c81BLmOZ0U.
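The question-to-IR-to-query pipeline described in the abstract can be sketched roughly as below. The paper's actual IR, prompts, embedding model, and repair procedure are not reproduced here, so every function body, the triple-based IR, and the `edges(s, p, o)` table layout are hypothetical stand-ins for illustration only:

```python
from dataclasses import dataclass

@dataclass
class IRTriple:
    """Hypothetical IR unit: one KG edge pattern; '?'-prefixed strings are variables."""
    subject: str
    relation: str
    obj: str

KG_RELATIONS = ["authored", "advisor_of"]  # relations that actually exist in the toy KG

def llm_to_ir(question: str) -> list[IRTriple]:
    """Stand-in for the LLM call that maps a question to the IR.
    A real system would prompt an LLM; here we hard-code one example."""
    # "Who advised the author of X?" -> two chained triple patterns
    return [IRTriple("?author", "wrote", "X"),
            IRTriple("?advisor", "advised", "?author")]

def repair_relation(relation: str, kg_relations: list[str]) -> str:
    """Stand-in for the semantic-search repair step: snap an LLM-invented
    relation name onto the closest relation that exists in the KG.
    Real QirK uses vector embeddings; we fake 'closest' with character overlap."""
    return max(kg_relations, key=lambda r: sum(c in r for c in relation))

def ir_to_sql(triples: list[IRTriple]) -> str:
    """Compile the repaired IR into SQL over a single edges(s, p, o) table,
    answering with the subject of the final triple."""
    froms = ", ".join(f"edges t{i}" for i in range(len(triples)))
    preds = [f"t{i}.p = '{t.relation}'" for i, t in enumerate(triples)]
    # Join triples that share a variable (a minimal, illustrative join rule).
    for i, a in enumerate(triples):
        for j, b in enumerate(triples[:i]):
            if a.obj == b.subject:
                preds.append(f"t{i}.o = t{j}.s")
    return f"SELECT t{len(triples) - 1}.s FROM {froms} WHERE {' AND '.join(preds)}"

ir = llm_to_ir("Who advised the author of X?")
repaired = [IRTriple(t.subject, repair_relation(t.relation, KG_RELATIONS), t.obj)
            for t in ir]
sql = ir_to_sql(repaired)
print(sql)
```

Note how the repair step turns the hallucination-prone output of `llm_to_ir` ("wrote", "advised") into relations the KG can actually answer ("authored", "advisor_of"), which is the synthesis of LLM capability and KG reliability the abstract describes.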


Efficient Parallel Multi-Hop Reasoning: A Scalable Approach for Knowledge Graph Analysis

Tithi, Jesmin Jahan, Checconi, Fabio, Petrini, Fabrizio

arXiv.org Artificial Intelligence

Multi-hop reasoning (MHR) is a process in artificial intelligence and natural language processing where a system needs to make multiple inferential steps to arrive at a conclusion or answer. In the context of knowledge graphs or databases, it involves traversing multiple linked entities and relationships to understand complex queries or perform tasks requiring a deeper understanding. Multi-hop reasoning is a critical function in various applications, including question answering, knowledge base completion, and link prediction. It has garnered significant interest in artificial intelligence, machine learning, and graph analytics. This paper focuses on optimizing MHR for time efficiency on large-scale graphs, diverging from the traditional emphasis on accuracy, which is an orthogonal goal. We introduce a novel parallel algorithm that harnesses domain-specific learned embeddings to efficiently identify the top-K paths between vertices in a knowledge graph to find the best answers to a three-hop query. Our contributions are: (1) We present a new parallel algorithm to enhance MHR performance, scalability, and efficiency. (2) We demonstrate the algorithm's superior performance on leading-edge Intel and AMD architectures through empirical results. We showcase the algorithm's practicality through a case study on identifying academic affiliations of potential Turing Award laureates in Deep Learning, highlighting its capability to handle intricate entity relationships. This demonstrates the potential of our approach to enable high-performance MHR, useful for navigating the growing complexity of modern knowledge graphs.
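The three-hop, top-K path search the abstract describes can be sketched as a parallel beam search. The paper's actual algorithm, learned embeddings, and CPU-level optimizations are not reproduced here; the toy graph, the hand-picked edge scores standing in for embedding-derived plausibilities, and the thread-pool expansion below are simplified illustrations of the idea only:

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

# Toy KG: adjacency lists with an edge "score" standing in for an
# embedding-derived plausibility (higher is better).
GRAPH = {
    "A": [("B", 0.9), ("C", 0.4)],
    "B": [("D", 0.8), ("E", 0.3)],
    "C": [("D", 0.7)],
    "D": [("F", 0.9)],
    "E": [("F", 0.6)],
}

def expand(scored_path):
    """Extend one partial path by every outgoing edge (the unit of parallel work)."""
    score, path = scored_path
    last = path[-1]
    return [(score * s, path + [nbr]) for nbr, s in GRAPH.get(last, [])]

def top_k_paths(source, hops=3, k=2):
    """Hop-synchronous beam search: expand the frontier in parallel,
    then prune to the k best-scoring partial paths before the next hop."""
    frontier = [(1.0, [source])]
    with ThreadPoolExecutor() as pool:
        for _ in range(hops):
            batches = pool.map(expand, frontier)       # parallel expansion
            candidates = [p for batch in batches for p in batch]
            frontier = heapq.nlargest(k, candidates, key=lambda ps: ps[0])
    return frontier

paths = top_k_paths("A")
print(paths)
```

Pruning to the top K after every hop is what keeps the work bounded on large graphs: the frontier never exceeds K paths regardless of graph size, and each hop's expansions are independent, which is what makes the per-hop step embarrassingly parallel.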


Mathematician wins Turing award for harnessing randomness

New Scientist

The mathematician Avi Wigderson has won the 2023 Turing award, often referred to as the Nobel prize for computing, for his work on understanding how randomness can shape and improve computer algorithms. Wigderson, who also won the prestigious Abel prize in 2021 for his mathematical contributions to computer science, was taken aback by the award. "The [Turing] committee fooled me into believing that we were going to have some conversation about collaborating," he says. "When I zoomed in, the whole committee was there and they told me. I was excited, surprised and happy."



AI should be 'a global priority alongside pandemics and nuclear war', new letter states

Daily Mail - Science & tech

A new open letter calling for regulation to mitigate 'the risk of extinction from AI' has been signed by more than 350 industry experts, including several developing the tech. The 22-word statement reads: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.' The short letter was signed by OpenAI CEO Sam Altman, creator of ChatGPT, who called on Congress to establish regulations for AI. While the document does not provide details, the statement likely aims to convince policymakers to create plans for the event that AI goes rogue, just as there are plans in place for pandemics and nuclear wars. Altman was joined by other known leaders in AI, including Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic, and executives from Microsoft and Google.


Always Improving Performance

Communications of the ACM

As a young man, Jack Dongarra thought he would probably teach science to high school students. That was his plan when he enrolled at Chicago State College, which had become Chicago State University by the time he graduated in 1972. Over the course of his studies, he began to be fascinated by computers. In his senior year, physics professor Harvey Leff suggested he apply for an internship at nearby Argonne National Laboratory, where he could gain some computing experience. There, Dongarra joined a group developing EISPACK, a software library for calculating eigenvalues, quantities from linear algebra that are important in simulations of chemistry and physics.